Add gql compiler #185
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##            master      #185      +/-   ##
=============================================
- Coverage   100.00%    91.86%    -8.14%
=============================================
  Files           14        20        +6
  Lines          949      1549      +600
=============================================
+ Hits           949      1423      +474
- Misses           0       126      +126
```

Continue to review full report at Codecov.
This is quite nice, but the script does not work for me; it seems a file is missing.

```
$ gql-compiler
Traceback (most recent call last):
  File "/home/leszek/miniconda3/envs/gql-dev/bin/gql-compiler", line 9, in <module>
    from gql.compiler.utils_schema import compile_schema_library
ModuleNotFoundError: No module named 'gql.compiler.utils_schema'
```
Could you also please move variables.py and enum_utils.py into a compiler/runtime folder?

Of course this is quite a big PR, so I would first like to release a stable 3.0 version which contains custom scalars (I'll make sure it is compatible with this code).

Note that I would prefer that you don't force-push modifications; the code will be squashed anyway when the PR is merged.
Hi @leszekhanusz, thanks for the great comments.

In about a month, I'd say.

@leszekhanusz I see that release 3.0 is delayed.
@naormatania Sorry about that. I had less time than expected to work on gql. I did some research about the best way to integrate custom scalar types, but it can get pretty complicated to think about everything, so I admit I procrastinated a bit about that.

We should support parsing custom scalars in the input, like you do. Custom scalars already exist in graphql-core in the schema (GraphQLScalarType), so we should use this definition. We should be able to reconstruct the full schema with the custom scalar definitions from the introspection in addition to the provided custom scalars. We also need to think about the best way to import custom scalar packages, in a way that works correctly for client or server. Then all that needs to be integrated afterwards in your code.
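For illustration only (not code from this PR): a minimal sketch of what reusing graphql-core's `GraphQLScalarType` together with an introspected schema could look like. `build_client_schema` and `GraphQLScalarType` are graphql-core APIs; `DateTimeScalar` and `attach_custom_scalars` are hypothetical names.

```python
from datetime import datetime

from graphql import GraphQLScalarType, build_client_schema

# Hypothetical sketch: describe the custom scalar once with graphql-core's
# GraphQLScalarType so the same definition could serve client and server.
DateTimeScalar = GraphQLScalarType(
    name="DateTime",
    serialize=lambda value: value.isoformat(),                # Python -> JSON
    parse_value=lambda value: datetime.fromisoformat(value),  # JSON -> Python
)


def attach_custom_scalars(introspection, custom_scalars):
    """Rebuild the schema from an introspection result, then replace the
    placeholder scalars with the fully defined ones supplied by the user.
    (attach_custom_scalars is a hypothetical helper, not a gql API.)"""
    schema = build_client_schema(introspection)
    for scalar_type in custom_scalars:
        existing = schema.type_map.get(scalar_type.name)
        if isinstance(existing, GraphQLScalarType):
            existing.serialize = scalar_type.serialize
            existing.parse_value = scalar_type.parse_value
    return schema
```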
@naormatania I really like this. I wrote my own compiler with a simpler setup, but after looking at your code, it's not clear to me how you can define custom fields that you would want to return in a query. Can you show me an example with the DogQuery?
Sure @isaulv, I'll post an example here; let me know if you have more questions.

```graphql
scalar DateTime

type Dog {
  id: ID!
  name: String
  time: DateTime
}

type Query {
  dog(id: String!): Dog
}

schema {
  query: Query
}
```
I also define how this custom scalar is encoded/decoded in a Python file that I add to the project:

```python
from datetime import datetime, timedelta, timezone, tzinfo
from typing import Dict, Optional, Tuple

from marshmallow import fields as marshmallow_fields

from gql.compiler.renderer_dataclasses import CustomScalar


class simple_utc(tzinfo):
    def tzname(self, dt: Optional[datetime]) -> Optional[str]:
        return "UTC"

    def utcoffset(self, dt: Optional[datetime]) -> Optional[timedelta]:
        return timedelta(0)


def isoformat(time: datetime) -> str:
    return datetime.isoformat(time.replace(tzinfo=simple_utc()))


# Helpers for parsing the result of isoformat()
def _parse_isoformat_date(dtstr: str) -> Tuple[int, int, int]:
    # It is assumed that this function will only be called with a
    # string of length exactly 10, and (though this is not used) ASCII-only
    year = int(dtstr[0:4])
    if dtstr[4] != "-":
        raise ValueError("Invalid date separator: %s" % dtstr[4])

    month = int(dtstr[5:7])
    if dtstr[7] != "-":
        raise ValueError("Invalid date separator")

    day = int(dtstr[8:10])

    return (year, month, day)


def _parse_hh_mm_ss_ff(tstr: str) -> Tuple[int, int, int, int]:
    # Parses things of the form HH[:MM[:SS[.fff[fff]]]]
    len_str = len(tstr)

    time_comps = [0, 0, 0, 0]
    pos = 0
    for comp in range(0, 3):
        if (len_str - pos) < 2:
            raise ValueError("Incomplete time component")

        time_comps[comp] = int(tstr[pos : pos + 2])  # noqa
        pos += 2

        next_char = tstr[pos : pos + 1]  # noqa
        if not next_char or comp >= 2:
            break

        if next_char != ":":
            raise ValueError("Invalid time separator: %c" % next_char)

        pos += 1

    if pos < len_str:
        if tstr[pos] != ".":
            raise ValueError("Invalid microsecond component")
        else:
            pos += 1

            len_remainder = len_str - pos
            if len_remainder not in (3, 6):
                raise ValueError("Invalid microsecond component")

            time_comps[3] = int(tstr[pos:])
            if len_remainder == 3:
                time_comps[3] *= 1000

    return (time_comps[0], time_comps[1], time_comps[2], time_comps[3])


def _parse_isoformat_time(tstr: str) -> Tuple[int, int, int, int, Optional[tzinfo]]:
    # Format supported is HH[:MM[:SS[.fff[fff]]]][+HH:MM[:SS[.ffffff]]]
    len_str = len(tstr)
    if len_str < 2:
        raise ValueError("Isoformat time too short")

    # This is equivalent to re.search('[+-]', tstr), but faster
    tz_pos = tstr.find("-") + 1 or tstr.find("+") + 1
    timestr = tstr[: tz_pos - 1] if tz_pos > 0 else tstr

    time_comps = _parse_hh_mm_ss_ff(timestr)

    tzi = None
    if tz_pos > 0:
        tzstr = tstr[tz_pos:]

        # Valid time zone strings are:
        # HH:MM            len: 5
        # HH:MM:SS         len: 8
        # HH:MM:SS.ffffff  len: 15
        if len(tzstr) not in (5, 8, 15):
            raise ValueError("Malformed time zone string")

        tz_comps = _parse_hh_mm_ss_ff(tzstr)
        if all(x == 0 for x in tz_comps):
            tzi = timezone.utc
        else:
            tzsign = -1 if tstr[tz_pos - 1] == "-" else 1

            td = timedelta(
                hours=tz_comps[0],
                minutes=tz_comps[1],
                seconds=tz_comps[2],
                microseconds=tz_comps[3],
            )

            tzi = timezone(tzsign * td)

    return (*time_comps, tzi)


def fromisoformat(date_string: str) -> datetime:
    """Construct a datetime from the output of datetime.isoformat()."""
    if not isinstance(date_string, str):
        raise TypeError("fromisoformat: argument must be str")

    # Split this at the separator
    dstr = date_string[0:10]
    tstr = date_string[11:]

    try:
        date_components = _parse_isoformat_date(dstr)
    except ValueError:
        raise ValueError(f"Invalid isoformat string: {date_string!r}")

    if tstr:
        try:
            time_components = _parse_isoformat_time(tstr)
        except ValueError:
            raise ValueError(f"Invalid isoformat string: {date_string!r}")
    else:
        time_components = (0, 0, 0, 0, None)

    return datetime(*(date_components + time_components))


custom_scalars: Dict[str, CustomScalar] = {
    "DateTime": CustomScalar(
        name="DateTime",
        type=datetime,
        encoder=isoformat,
        decoder=fromisoformat,
        mm_field=marshmallow_fields.DateTime(format="iso"),
    ),
}
```

Now let's define a query file to compile:

```graphql
query DogQuery($id: String!) {
  dog(id: $id) {
    id
    name
    time
  }
}
```

In order to compile it I run the gql compiler, which generates the following file:

```python
#!/usr/bin/env python3
# @generated AUTOGENERATED file. Do not Change!

from dataclasses import dataclass, field as _field
from ..config import custom_scalars, datetime
from gql.compiler.runtime.variables import encode_variables
from gql import gql, Client
from gql.transport.exceptions import TransportQueryError
from functools import partial
from numbers import Number
from typing import Any, AsyncGenerator, Dict, List, Generator, Optional
from dataclasses_json import DataClassJsonMixin, config

# fmt: off
QUERY: List[str] = ["""
query DogQuery($id: String!) {
  dog(id: $id) {
    id
    name
    time
  }
}
"""
]


class DogQuery:
    @dataclass(frozen=True)
    class DogQueryData(DataClassJsonMixin):
        @dataclass(frozen=True)
        class Dog(DataClassJsonMixin):
            id: str
            name: Optional[str]
            time: Optional[datetime] = _field(metadata=config(encoder=custom_scalars["DateTime"].encoder, decoder=custom_scalars["DateTime"].decoder, mm_field=custom_scalars["DateTime"].mm_field))

        dog: Optional[Dog]

    # fmt: off
    @classmethod
    def execute(cls, client: Client, id: str) -> Optional[DogQueryData.Dog]:
        variables: Dict[str, Any] = {"id": id}
        new_variables = encode_variables(variables, custom_scalars)
        response_text = client.execute(
            gql("".join(set(QUERY))), variable_values=new_variables
        )
        res = cls.DogQueryData.from_dict(response_text)
        return res.dog

    # fmt: off
    @classmethod
    async def execute_async(cls, client: Client, id: str) -> Optional[DogQueryData.Dog]:
        variables: Dict[str, Any] = {"id": id}
        new_variables = encode_variables(variables, custom_scalars)
        response_text = await client.execute_async(
            gql("".join(set(QUERY))), variable_values=new_variables
        )
        res = cls.DogQueryData.from_dict(response_text)
        return res.dog
```
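For illustration (not part of the PR), a minimal sketch of how the generated `DogQuery` could be called; the endpoint URL and the import path of the generated module are assumptions.

```python
# Hypothetical usage sketch: the endpoint URL and the generated module's
# import path ("dog_query") are assumptions, not part of the PR.
from gql import Client
from gql.transport.requests import RequestsHTTPTransport

from dog_query import DogQuery  # module emitted by the compiler (assumed name)

transport = RequestsHTTPTransport(url="https://example.com/graphql")
client = Client(transport=transport)

# The DateTime custom scalar is decoded back into a datetime object.
dog = DogQuery.execute(client, id="1")
if dog is not None:
    print(dog.name, dog.time)
```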
So there are no plans to merge this in the near future as it is? It would be a shame for this work to go to waste. Is there maybe a middle-ground solution that could be applied here? E.g. merge this in with support for default scalar types and a warning that custom scalars are subject to change later on?
I hate to be the bearer of bad news, but after much hesitation it has finally been decided to leave the code generation code outside of this repo.
The compiler creates strongly typed classes from GraphQL query files.
It supports sync/async functions, all operations (queries, mutations, subscriptions) and other primitives like fragments, inputs and enums.
This PR is based on https://pypi.org/project/py-gql-client/, written at Facebook, which was originally based on https://github.com/graphql-python/gql-next.
For more details on usage, please refer to the documentation in doc/advanced/compiler.rst.
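For the async support mentioned above, a hedged usage sketch based on the generated `execute_async` shown earlier in this thread; the endpoint URL and the generated module name are assumptions.

```python
import asyncio

from gql import Client
from gql.transport.aiohttp import AIOHTTPTransport

from dog_query import DogQuery  # module emitted by the compiler (assumed name)


async def main() -> None:
    # Hypothetical endpoint; AIOHTTPTransport is gql's async HTTP transport.
    transport = AIOHTTPTransport(url="https://example.com/graphql")
    client = Client(transport=transport)

    # The compiler emits both a sync execute() and an async execute_async().
    dog = await DogQuery.execute_async(client, id="1")
    if dog is not None:
        print(dog.name, dog.time)


asyncio.run(main())
```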